Joint Multimodal Learning with Deep Generative Models

Authors

  • Masahiro Suzuki
  • Kotaro Nakayama
  • Yutaka Matsuo
چکیده

We investigate deep generative models that can exchange multiple modalities bidirectionally, e.g., generating images from corresponding texts and vice versa. Recently, some studies have handled multiple modalities with deep generative models such as variational autoencoders (VAEs). However, these models typically assume a fixed conditional relation between modalities, i.e., they can only generate modalities in one direction. To achieve our objective, we need to extract a joint representation that captures high-level concepts shared among all modalities and through which we can exchange them bidirectionally. Here we propose a joint multimodal variational autoencoder (JMVAE), in which all modalities are independently conditioned on a joint representation; in other words, it models a joint distribution of modalities. Furthermore, so that missing modalities can be generated properly from the remaining modalities, we develop an additional method, JMVAE-kl, which is trained by reducing the divergence between JMVAE's encoder and prepared networks for the respective modalities. Our experiments show that the proposed method obtains an appropriate joint representation from multiple modalities and generates and reconstructs them more properly than conventional VAEs. We further demonstrate that JMVAE can generate multiple modalities bidirectionally.
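To make the training objectives described in the abstract concrete, the following is a minimal sketch of our reading of JMVAE and JMVAE-kl; the notation (two modalities x and w, latent variable z, joint encoder q(z|x,w), unimodal encoders q(z|x) and q(z|w), decoders p(x|z) and p(w|z), prior p(z), and a weighting hyperparameter alpha) is our own shorthand, and the exact form may differ from the paper:

  \mathcal{L}_{JMVAE}(x, w) = \mathbb{E}_{q(z|x,w)}[ \log p(x|z) + \log p(w|z) ] - D_{KL}( q(z|x,w) \| p(z) )

  \mathcal{L}_{JMVAE\text{-}kl}(x, w) = \mathcal{L}_{JMVAE}(x, w) - \alpha [ D_{KL}( q(z|x,w) \| q(z|x) ) + D_{KL}( q(z|x,w) \| q(z|w) ) ]

Under this reading, the extra KL terms pull the unimodal encoders toward the joint encoder, so at test time a missing modality w can be generated from x alone by sampling z from q(z|x) and decoding with p(w|z), which is what enables bidirectional exchange.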

Related Papers

Improved Multimodal Deep Learning with Variation of Information

Deep learning has been successfully applied to multimodal representation learning problems, a common strategy being to learn joint representations shared across multiple modalities on top of layers of modality-specific networks. Nonetheless, there remains the question of how to learn a good association between data modalities; in particular, a good generative model of multimodal data...

Multimodal Learning with Deep Boltzmann Machines

A Deep Boltzmann Machine is described for learning a generative model of data that consists of multiple and diverse input modalities. The model can be used to extract a unified representation that fuses modalities together. We find that this representation is useful for classification and information retrieval tasks. The model works by learning a probability density over the space of multimodal...

Variational methods for Conditional Multimodal Learning: Generating Human Faces from Attributes

Prior to this decade, the field of computer vision was primarily focused on hand-crafted feature extraction methods used in conjunction with discriminative models for specific tasks such as object recognition, detection/localization, tracking, etc. A generative approach to image understanding was neither within reach nor the prime concern of the period. In this paper, we address the following problem: G...

Learning Representations for Multimodal Data with Deep Belief Nets

We propose a Deep Belief Network architecture for learning a joint representation of multimodal data. The model defines a probability distribution over the space of multimodal inputs and allows sampling from the conditional distributions over each data modality. This makes it possible for the model to create a multimodal representation even when some data modalities are missing. Our experimental...

Semi-supervised Multimodal Learning with Deep Generative Models

In recent years, deep neural networks have been used mainly as discriminative models in multimodal learning. Training them requires large amounts of labeled data, but obtaining such data is difficult because labeling inputs is labor-intensive. Therefore, semi-supervised learning, which improves discriminator performance using unlabeled data, is important. Among semi-supervised learning, me...


Journal:
  • CoRR

Volume: abs/1611.01891

Pages: -

Publication year: 2016